    Lynch syndrome: from detection to treatment

    Lynch syndrome (LS) is an inherited cancer predisposition syndrome associated with a high lifetime risk of developing tumours, most notably colorectal and endometrial. It arises in the context of pathogenic germline variants in one of the mismatch repair genes, which are necessary to maintain genomic stability. LS remains underdiagnosed in the population despite national recommendations for empirical testing in all new colorectal and endometrial cancer cases. There are now well-established colorectal cancer surveillance programmes, but the high rate of interval cancers identified, coupled with a paucity of high-quality evidence for extra-colonic cancer surveillance, means there is still much that can be achieved in diagnosis, risk stratification and management. The widespread adoption of preventative pharmacological measures is on the horizon and there are exciting advances in the role of immunotherapy and anti-cancer vaccines for the treatment of these highly immunogenic LS-associated tumours. In this review, we explore the current landscape and future perspectives for the identification, risk stratification and optimised management of LS with a focus on the gastrointestinal system. We highlight the current guidelines on diagnosis, surveillance, prevention and treatment and link molecular disease mechanisms to clinical practice recommendations.

    Automated colonoscopy withdrawal phase duration estimation using cecum detection and surgical tasks classification

    Colorectal cancer is the third most common type of cancer, with almost two million new cases worldwide. These cancers develop from neoplastic polyps, most commonly adenomas, which can be removed during colonoscopy to prevent colorectal cancer from occurring. Unfortunately, up to a quarter of polyps are missed during colonoscopies. Studies have shown that polyp detection during a procedure correlates with the time spent searching for polyps, called the withdrawal time. The different phases of the procedure (cleaning, therapeutic, and exploration phases) make it difficult to precisely measure the withdrawal time, which should only include the exploration phase. Separating this from the other phases requires manual time measurement during the procedure, which is rarely performed. In this study, we propose a method to automatically detect the cecum, which marks the start of the withdrawal phase, and to classify the different phases of the colonoscopy, allowing precise estimation of the final withdrawal time. This is achieved using a ResNet for both detection and classification, trained with two public datasets and a private dataset composed of 96 full procedures. Out of 19 testing procedures, 18 had their withdrawal time correctly estimated, with a mean error of 5.52 seconds per minute per procedure.
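    The abstract outlines a frame-level ResNet used for both cecum detection and phase classification, but the implementation itself is not reproduced here. Below is a minimal, hedged sketch of how such a pipeline could be wired together; the phase taxonomy, the auxiliary cecum output, the ResNet-50 backbone and the frame rate are illustrative assumptions rather than the authors' exact design.

```python
# Hypothetical sketch only: a frame-level phase classifier with a ResNet
# backbone and an auxiliary "cecum visible" output, followed by withdrawal-time
# estimation from the predicted phases. Labels, backbone and frame rate are
# assumptions, not the published design.
import torch.nn as nn
from torchvision import models

PHASES = ["exploration", "cleaning", "therapeutic"]  # assumed phase taxonomy

class PhaseClassifier(nn.Module):
    def __init__(self, num_phases=len(PHASES)):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Replace the final layer: num_phases logits plus one cecum-detection logit.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_phases + 1)
        self.backbone = backbone

    def forward(self, frames):                      # frames: (N, 3, H, W)
        logits = self.backbone(frames)
        return logits[:, :-1], logits[:, -1]        # (phase logits, cecum logit)

def withdrawal_time_seconds(phase_preds, cecum_flags, fps=25.0):
    """Count exploration-phase frames after the first cecum detection and
    convert to seconds. phase_preds: per-frame phase indices; cecum_flags:
    per-frame booleans derived from the cecum output."""
    cecum_idx = next((i for i, seen in enumerate(cecum_flags) if seen), None)
    if cecum_idx is None:
        return 0.0
    exploration = PHASES.index("exploration")
    n_explore = sum(1 for p in phase_preds[cecum_idx:] if p == exploration)
    return n_explore / fps
```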

    Polyp detection on video colonoscopy using a hybrid 2D/3D CNN

    Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whereas in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remains a challenge, as it contains low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, it also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing the spatial and temporal correlation of the predictions while preserving real-time detection. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. Higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
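    As a rough illustration of the hybrid 2D/3D idea described above (not the paper's exact architecture), the sketch below applies a shared 2D encoder to each frame of a short clip, fuses the per-frame features with a 3D convolution over time, and decodes a segmentation mask for the most recent frame. The clip length, channel widths and layer counts are assumptions.

```python
# Minimal sketch of a hybrid 2D/3D segmenter: 2D per-frame features, a shallow
# 3D block mixing a short clip, and a decoder for a polyp mask. All sizes are
# illustrative assumptions.
import torch.nn as nn

class Hybrid2D3DSegmenter(nn.Module):
    def __init__(self, clip_len=4, feat=32):
        super().__init__()
        self.encoder2d = nn.Sequential(            # shared 2D encoder, applied per frame
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.temporal3d = nn.Sequential(            # 3D convolution over (T, H, W)
            nn.Conv3d(feat, feat, kernel_size=(clip_len, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(               # upsample back to input resolution
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1),
        )

    def forward(self, clip):                        # clip: (B, T, 3, H, W), T == clip_len
        b, t, c, h, w = clip.shape
        f = self.encoder2d(clip.reshape(b * t, c, h, w))          # (B*T, F, H/4, W/4)
        f = f.reshape(b, t, *f.shape[1:]).permute(0, 2, 1, 3, 4)  # (B, F, T, H/4, W/4)
        f = self.temporal3d(f).squeeze(2)                         # fuse time -> (B, F, H/4, W/4)
        return self.decoder(f)                                    # (B, 1, H, W) polyp logits
```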

    Spatio-temporal classification for polyp diagnosis

    Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, both showing an increase in performance and robustness in extensive experiments on internal and openly available benchmark datasets.
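    The two methods themselves are not described in this abstract. As a simple, hedged illustration of how temporal information can stabilise per-frame optical diagnosis, the snippet below averages adenoma probabilities over a sliding window before thresholding; the window size and threshold are arbitrary choices made for the example.

```python
# Illustrative sketch (not the paper's exact methods): stabilising per-frame
# adenoma/non-adenoma predictions by averaging class probabilities over a
# sliding temporal window. Window size and threshold are assumptions.
from collections import deque

def smooth_predictions(frame_probs, window=16, threshold=0.5):
    """frame_probs: iterable of per-frame adenoma probabilities in [0, 1].
    Returns a list of smoothed binary decisions, one per frame."""
    buf, decisions = deque(maxlen=window), []
    for p in frame_probs:
        buf.append(p)
        decisions.append(sum(buf) / len(buf) >= threshold)
    return decisions
```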

    Identifying key mechanisms leading to visual recognition errors for missed colorectal polyps using eye-tracking technology

    BACKGROUND AND AIMS: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (7 trainees, 4 medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognised as polyps by participants. A video study was also performed including 39 subtle polyps, in which polyp recognition performance was compared with a convolutional neural network (CNN). RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P=0.0264). In the video validation, the CNN detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0% and 15.4% respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation, including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.
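    The error taxonomy above can be made concrete with a small, purely illustrative sketch: a missed polyp is counted as a gaze error if no fixation ever landed on the lesion region, and as a cognitive error if it was fixated but still not reported. The fixation and bounding-box representation is an assumption for the example.

```python
# Hedged sketch of the reported error taxonomy: gaze error = lesion never
# fixated; cognitive error = lesion fixated but not recognised as a polyp.
def classify_miss(fixations, lesion_box):
    """fixations: list of (x, y) gaze points; lesion_box: (x0, y0, x1, y1).
    Returns 'gaze_error' if the lesion was never fixated, else 'cognitive_error'."""
    x0, y0, x1, y1 = lesion_box
    observed = any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in fixations)
    return "cognitive_error" if observed else "gaze_error"
```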

    Survey on the perceptions of UK gastroenterologists and endoscopists to artificial intelligence

    Background and aims: With the potential integration of artificial intelligence (AI) into clinical practice, it is essential to understand end users’ perception of this novel technology. The aim of this study, which was endorsed by the British Society of Gastroenterology (BSG), was to evaluate the UK gastroenterology and endoscopy communities’ views on AI. Methods: An online survey was developed and disseminated to gastroenterologists and endoscopists across the UK. Results: One hundred and four participants completed the survey. Quality improvement in endoscopy (97%) and better endoscopic diagnosis (92%) were perceived as the most beneficial applications of AI to clinical practice. The most significant challenges were accountability for incorrect diagnoses (85%) and potential bias of algorithms (82%). A lack of guidelines (92%) was identified as the greatest barrier to adopting AI in routine clinical practice. Participants identified real-time endoscopic image diagnosis (95%) as a research priority for AI, while the most significant perceived barriers to AI research were funding (82%) and the availability of annotated data (76%). Participants considered the priorities for the BSG AI Task Force to be identifying research priorities (96%), producing guidelines for adopting AI devices in clinical practice (93%) and supporting the delivery of multicentre clinical trials (91%). Conclusion: This survey has captured views from the UK gastroenterology and endoscopy community regarding AI in clinical practice and research, and identified priorities for the newly formed BSG AI Task Force.

    Computer aided characterization of early cancer in Barrett's esophagus on i-scan magnification imaging - Multicenter international study

    BACKGROUND AND AIMS: We aimed to develop a computer-aided characterization system that can support the diagnosis of dysplasia in Barrett's esophagus (BE) on magnification endoscopy. METHODS: Videos were collected in high-definition magnification white light and virtual chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic/non-dysplastic BE (NDBE) from 4 centres. We trained a neural network with a ResNet101 architecture to classify frames as dysplastic or non-dysplastic. The network was tested on three different scenarios: high-quality still images, all available video frames and a selected sequence within each video. RESULTS: 57 different patients, each with videos of magnification areas of BE (34 dysplasia, 23 NDBE), were included. Performance was evaluated using a leave-one-patient-out cross-validation methodology. 60,174 magnification video frames (39,347 dysplasia, 20,827 NDBE) were used to train the network. The testing set included 49,726 i-scan 3/optical enhancement magnification frames. On 350 high-quality still images the network achieved a sensitivity of 94%, specificity of 86% and area under the ROC curve (AUROC) of 96%. On all 49,726 available video frames the network achieved a sensitivity of 92%, specificity of 82% and AUROC of 95%. On a selected sequence of frames per case (total of 11,471 frames) we used an exponentially weighted moving average of classifications on consecutive frames to characterize dysplasia; the network achieved a sensitivity of 92%, specificity of 84% and AUROC of 96%. The mean assessment speed per frame was 0.0135 seconds (SD 0.006). CONCLUSION: Our network can characterize BE dysplasia with high accuracy and speed on high-quality magnification images and sequences of video frames, moving it towards real-time automated diagnosis.
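    The abstract explicitly describes smoothing consecutive frame classifications with an exponentially weighted moving average (EWMA). A minimal sketch of that smoothing step is shown below; the smoothing factor and decision threshold are assumptions, as the published values are not given here.

```python
# Minimal EWMA smoothing of per-frame dysplasia probabilities, as described in
# the abstract. The smoothing factor alpha and the 0.5 threshold are assumptions.
def ewma_dysplasia(frame_probs, alpha=0.3, threshold=0.5):
    """frame_probs: per-frame dysplasia probabilities in temporal order.
    Returns per-frame smoothed probabilities and binary dysplasia calls."""
    smoothed, calls, s = [], [], None
    for p in frame_probs:
        s = p if s is None else alpha * p + (1 - alpha) * s
        smoothed.append(s)
        calls.append(s >= threshold)
    return smoothed, calls
```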

    A new artificial intelligence system successfully detects and localises early neoplasia in Barrett's esophagus by using convolutional neural networks

    BACKGROUND AND AIMS: Seattle protocol biopsies for Barrett's esophagus (BE) surveillance are labour-intensive with low compliance. Dysplasia detection rates vary, leading to missed lesions. This can potentially be offset with computer-aided detection. We have developed convolutional neural networks (CNNs) to identify areas of dysplasia and where to target biopsy. METHODS: 119 videos were collected in high-definition white light and optical chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic and non-dysplastic BE (NDBE). We trained an indirectly supervised CNN to classify images as dysplastic/non-dysplastic using whole-video annotations to minimise selection bias and maximise accuracy. The CNN was trained using 148,936 video frames (31 dysplastic patients, 31 NDBE, two with normal esophagus), validated on 25,161 images from 11 patient videos and tested on 264 i-scan 1 images from 28 dysplastic and 16 NDBE patients, which included expert delineations. To localise targeted biopsies/delineations, a second, directly supervised CNN was generated based on expert delineations of 94 dysplastic images from 30 patients. This was tested on 86 i-scan 1 images from 28 dysplastic patients. FINDINGS: The indirectly supervised CNN achieved a per-image sensitivity in the test set of 91%, specificity of 79% and area under the receiver operating characteristic curve of 93% for detecting dysplasia. Per-lesion sensitivity was 100%. Mean assessment speed was 48 frames per second (fps). 97% of targeted biopsy predictions matched expert and histological assessment, at 56 fps. The artificial intelligence system performed better than six endoscopists. INTERPRETATION: Our CNNs classify and localise dysplastic Barrett's esophagus, potentially supporting endoscopists during surveillance.
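    The study describes a two-stage system: an indirectly supervised CNN that flags dysplastic frames, and a directly supervised CNN that localises where to target biopsy. The sketch below shows, under assumptions, how such a pair of networks could be chained at inference time; the model interfaces, output shapes and the probability threshold are hypothetical.

```python
# Illustrative two-stage inference only: a frame-level dysplasia classifier
# followed by a localisation network for targeted biopsy. Model classes,
# output shapes and the 0.5 threshold are assumptions.
import torch

@torch.no_grad()
def assess_frame(frame, classifier, localiser, threshold=0.5):
    """frame: (1, 3, H, W) tensor.
    Returns (is_dysplastic, biopsy_heatmap or None)."""
    prob = torch.sigmoid(classifier(frame)).item()   # stage 1: dysplasia probability
    if prob < threshold:
        return False, None
    heatmap = torch.sigmoid(localiser(frame))        # stage 2: per-pixel biopsy target map
    return True, heatmap
```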

    Optical diagnosis of colorectal polyps using convolutional neural networks.

    Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-malignant and neoplastic polyps. Although technologies for image-enhanced endoscopy are widely available, optical diagnosis has not been incorporated into routine clinical practice, mainly due to significant inter-operator variability. In recent years, there has been a growing number of studies demonstrating the potential of convolutional neural networks (CNNs) to enhance the optical diagnosis of polyps. Data suggest that the use of CNNs might mitigate the inter-operator variability amongst endoscopists, potentially enabling a "resect and discard" or "leave in" strategy to be adopted in real time. This would have significant financial benefits for healthcare systems, avoid unnecessary polypectomies of non-neoplastic polyps and improve the efficiency of colonoscopy. Here, we review advances in CNNs for the optical diagnosis of colorectal polyps, current limitations and future directions.